Subspace Kernel Discriminant Analysis for Speech Recognition

Author

  • Hakan Erdoğan
Abstract

Kernel Discriminant Analysis (KDA) has been successfully applied to many pattern recognition problems. KDA transforms the original problem into a space of dimension N, where N is the number of training vectors. For speech recognition, N is usually prohibitively large, pushing the computational requirements beyond current capabilities. In this paper, we provide a formulation of a subspace version of KDA that makes it applicable to speech recognition, conveniently enabling nonlinear feature-space transformations that yield discriminative lower-dimensional features.
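
The core idea can be illustrated with a short sketch: instead of expressing the discriminant directions over all N training vectors, restrict them to the span of a small set of M anchor vectors and solve an ordinary Fisher discriminant on the resulting M-dimensional kernel features. The RBF kernel, random anchor selection, and regularization below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel values between rows of A and rows of B.
    d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def subspace_kda_fit(X, y, anchors, n_components, gamma=0.1, reg=1e-4):
    # Kernel features against M anchors instead of all N training vectors.
    K = rbf_kernel(X, anchors, gamma)            # shape (N, M)
    mu = K.mean(axis=0)
    Sb = np.zeros((K.shape[1], K.shape[1]))      # between-class scatter
    Sw = np.zeros_like(Sb)                       # within-class scatter
    for c in np.unique(y):
        Kc = K[y == c]
        mc = Kc.mean(axis=0)
        Sb += len(Kc) * np.outer(mc - mu, mc - mu)
        Sw += (Kc - mc).T @ (Kc - mc)
    Sw += reg * np.eye(Sw.shape[0])              # regularize for numerical stability
    # Generalized eigenproblem Sb a = lambda Sw a; keep the leading directions.
    evals, evecs = eigh(Sb, Sw)
    return evecs[:, ::-1][:, :n_components]      # (M, n_components) projection

def subspace_kda_transform(Xnew, anchors, A, gamma=0.1):
    # Low-dimensional discriminative features for new data.
    return rbf_kernel(Xnew, anchors, gamma) @ A
```

With, for example, anchors drawn as a random subset of a few hundred training vectors, the eigen-solve shrinks from an N x N problem to an M x M one, which is what makes a KDA-style transform tractable at speech-recognition scale.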

Related articles

Kernel Discriminant Analysis Based on Canonical Differences for Face Recognition in Image Sets

A novel kernel discriminant transformation (KDT) algorithm based on the concept of canonical differences is presented for automatic face recognition applications. For each individual, the face recognition system compiles a multi-view facial image set comprising images with different facial expressions, poses and illumination conditions. Since the multi-view facial images are non-linearly distri...

Subspace-Based Feature Representation and Learning for Language Recognition

This paper presents a novel subspace-based approach for phonotactic language recognition. The whole framework is divided into two parts: the speech feature representation and the subspace-based learning algorithm. First, the phonetic information, as well as the contextual relationships possessed by spoken utterances, is retrieved more abundantly by likelihood computation and feature concatenatio...

Contextual Constraints Based Kernel Discriminant Analysis for Face Recognition

In this paper, an improved subspace learning method using contextual constraints based linear discriminant analysis (CCLDA) is proposed for face recognition. The linear CCLDA approach does not consider the higher order nonlinear information in facial images. However, the wide face variations posed by some factors, such as viewpoint, illumination and expression, existing in non-linear subspaces ...

Kernel Fisher Discriminant Analysis in Full Eigenspace

This work proposes a method that enables kernel Fisher discriminant analysis to be performed in the whole eigenspace for face recognition. It employs the ratio of eigenvalues to decompose the entire kernel feature space into two subspaces: a reliable subspace spanned mainly by facial variation and an unreliable subspace caused by the finite number of training samples. Eigenvectors are then scaled u...

A new extension of kernel feature and its application for visual recognition

In this paper, we first present a new perspective on the kernel feature. Kernel subspace methods can be regarded as two independent steps: an explicit kernel feature extraction step and a linear subspace analysis step on the extracted kernel features. The kernel feature vector of an image is composed of dot products between the image and all the training images using nonlinear dot product k...
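
The two-step view described here can be sketched in a few lines: form each image's explicit kernel feature vector (its kernel evaluations against every training image), then apply a plain linear subspace method to those vectors. The RBF kernel and the PCA step below are illustrative assumptions standing in for whatever kernel and linear analysis a given method actually uses.

```python
import numpy as np

def kernel_feature_vector(x, X_train, gamma=0.1):
    # Step 1: explicit kernel feature -- kernel evaluations between one
    # image vector x and every training image (a length-N vector).
    d2 = ((X_train - x) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

def linear_subspace_on_kernel_features(X_train, n_components, gamma=0.1):
    # Step 2: an ordinary linear subspace analysis (PCA here) applied to
    # the extracted kernel features, independent of how they were built.
    K = np.stack([kernel_feature_vector(x, X_train, gamma) for x in X_train])
    Kc = K - K.mean(axis=0)
    _, _, Vt = np.linalg.svd(Kc, full_matrices=False)
    return Vt[:n_components].T   # (N, n_components) basis in kernel-feature space
```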


Publication date: 2004